The Role of Constraints in Hebbian Learning
Abstract
Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a "graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is "sharpened" to a few maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated, whereas subtractive enforcement allows segregation under these circumstances. An approach to understanding constraints over input and over output cells is suggested, and some biological implementations are discussed.

Development in many neural systems appears to be guided by "Hebbian" or similar activity-dependent, correlation-based rules of synaptic modification (reviewed in [25]). Several lines of reasoning suggest that constraints limiting available synaptic resources may play an important role in this development. Experimentally, such development often appears to be competitive. That is, the fate of one set of inputs depends not only on its own patterns of activity, but on the activity patterns of other, competing inputs. A classic example is given by the experiments of Wiesel and Hubel on the effects of monocular versus binocular visual deprivation in young animals ([42]; see also [6]). If neural activity is reduced in one eye, inputs responding to that eye lose most of their connections to the visual cortex, while the inputs responding to the normally active, opposite eye gain more than their normal share of connections. If activity is reduced simultaneously in both eyes for a similar period of time, normal development results: each eye's inputs retain their normal cortical innervation. Such competition appears to yield a roughly constant final total strength of innervation regardless of the patterns of input activity, although the distribution of this innervation among the inputs depends upon neural activities. There is no direct biological evidence for the existence of competitive limits on the total synaptic strength supported by a cell. However, some support for the idea that a cell's synaptic resources are limited comes from biological evidence for intrinsic limits to the total number of synapses supported by a cell. In the retinotectal system in several species, if retinal cells are forced to innervate only a partial tectum, rather than their usual target of the entire tectum, the number of synapses per tectal cell remains normal [8, 31, 33].
This suggests that there are a limited number of postsynaptic sites per target cell, for which incoming neurons must compete. If inputs are given a larger than normal target structure, they innervate target cells with less than normal density, indicating intrinsic limits in the total presynaptic innervation by an input cell [9] (this also occurs in other systems [3, 4, 37]). In several systems, synapse number appears to be conserved regardless of the patterns of neural activity, even while synaptic rearrangements occur that vary with neural activity patterns. These systems include the monkey visual cortex during ocular dominance column development [2] and the goldfish optic tectum during post-regeneration map refinement [11, 10].

The existence of constraints limiting synaptic resources is also suggested on theoretical grounds. Development under simple correlation-based rules of synaptic modification typically leads to instability. Either all synapses grow to the maximum allowed value, or all synapses decay to zero strength. To achieve the results found biologically, a Hebbian rule must instead lead to the development of selectivity, so that some synaptic patterns grow in strength while others correspondingly shrink. One theoretical method of achieving selectivity is to use constraints to enforce a competition. Von der Malsburg [22] first proposed that constraints conserving the total synaptic strength supported by each input or output cell could be used to stabilize a Hebbian rule, a proposal also made by Perez et al. [34]. Much earlier, Rochester et al. [35] used a model that conserved total synaptic strength over all synapses in a Hebbian network. Rosenblatt [36] proposed a synaptic strength conservation rule in the context of perceptron learning. Subsequently, many authors have used one form or another of similar constraints, e.g. [5, 17, 18, 28, 32, 41].

A constraint that conserves total synaptic strength over a cell can be enforced through nonspecific decay of all synaptic strengths, provided the rate of this decay is set for the cell as a whole to cancel the total increase due to specific, Hebbian plasticity. Two simple types of decay can be considered. First, each synapse might decay at a rate proportional to its current strength; this is called multiplicative decay. Alternatively, all synapses might decay at an equal rate; this is called subtractive decay. The message of this paper is that the dynamical effects of a constraint depend significantly on whether it is enforced via multiplicative or subtractive decay. We have noted this briefly in previous work [20, 21, 25, 28, 29].

Intuitively, the difference between these two methods is that multiplicative enforcement of a constraint (a "multiplicative constraint") suppresses the growth of all patterns of synaptic weights, while subtractive enforcement (a "subtractive constraint") only suppresses patterns whose growth would violate the constraint. In each short time interval, a multiplicative constraint multiplies all weights, or equivalently all weight patterns, by a constant. If the unconstrained dynamics are linear in the synaptic strengths, the dynamical equations are invariant under such a rescaling. Then the dynamics under the multiplicative constraint are equivalent to those of unbounded linear growth. So the synaptic pattern that would grow the fastest in the absence of the constraint (the principal eigenvector of the operator determining unconstrained development) will eventually emerge, while all other components will eventually be suppressed. In contrast, in each short time interval, a subtractive constraint subtracts a small constant from all weights. This selectively alters the amplitude of weight patterns whose growth would violate the constraint (those that have a nonzero total synaptic strength) while leaving the amplitude of all other weight patterns unaffected.
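This intuition can be made precise with a short calculation (a sketch of the argument, written with the eigenvectors $e^a$ and eigenvalues $\lambda_a$ of the unconstrained operator and the time-varying decay rates $\gamma(t)$, $\epsilon(t)$ that are introduced formally in section 2). Under multiplicative enforcement, $\frac{d}{dt}w = Cw - \gamma(t)\,w$, the constraint term merely rescales the unconstrained solution:
\[
w(t) = e^{-\int_0^t \gamma(t')\,dt'}\; e^{Ct}\, w(0) = e^{-\int_0^t \gamma(t')\,dt'} \sum_a w^0_a\, e^{\lambda_a t}\, e^a ,
\]
so the direction of $w$ evolves exactly as under unbounded linear growth and is eventually dominated by the fastest-growing pattern. Under subtractive enforcement, $\frac{d}{dt}w = Cw - \epsilon(t)\,n$, any eigenvector component with $e^a \cdot n = 0$ obeys $\frac{d}{dt}(e^a \cdot w) = \lambda_a\,(e^a \cdot w)$ (for symmetric $C$): growth perpendicular to $n$ is entirely unaffected by the constraint.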
In this paper, we analyze the effects of subtractive and multiplicative constraints on the dynamics of linear learning rules. We then briefly discuss some theoretical and biological methods of implementing constraints.

1 Simple Examples of the Effects of Constraints

A few simple examples will illustrate that strikingly different outcomes can result from the subtractive or multiplicative enforcement of a constraint. The remainder of the paper will present a systematic analysis of these differences.

Consider synaptic plasticity on a single postsynaptic cell. Let $w$ be the vector of synaptic weights onto this cell; the $i$th component, $w_i$, is the synaptic weight from the $i$th input. We assume synaptic weights are initially randomly distributed with mean $w_{\rm init}$, and are limited to remain between a maximum value $w_{\max}$ and minimum value $w_{\min}$. We consider the effect of a constraint that conserves the total synaptic strength, $\sum_i w_i$, implemented either multiplicatively or subtractively.

Consider first a simple equation for Hebbian synaptic plasticity, $\frac{d}{dt}w = Cw$, where $C$ is a matrix describing correlations among input activities [18, 20, 28, 32]. Suppose this correlation is a Gaussian function of the separation of two inputs. We assume first that $w_{\min}=0$. The final outcomes of development under this equation are shown in Fig. 1A. With no constraint (column 1), all synapses saturate at $w_{\max}$, so all selectivity in the cell's response is lost. Under a multiplicative constraint (column 2), synaptic strengths decrease gradually from center to periphery. The final synaptic pattern in this case is proportional to the principal eigenvector of $C$. Under a subtractive constraint (columns 3 and 4), a central core of synapses saturates at or near strength $w_{\max}$, while the remaining synapses saturate at $w_{\min}$. If $w_{\max}$ is increased, or the total conserved synaptic strength $w_{\rm tot}$ is decreased by decreasing $w_{\rm init}$, the receptive field is sharpened under a subtractive constraint (column 4). The final number of nonzero synapses in this case (with $w_{\min}=0$) is approximately $w_{\rm tot}/w_{\max}$. In contrast, under the multiplicative constraint or without constraints, the shape of the final receptive field is unaltered by such changes in $w_{\max}$ and $w_{\rm tot}$.

If $w_{\min}$ is decreased below zero, center-surround receptive fields can result under subtractive constraints (Fig. 1B). Results under multiplicative constraints or without constraints are unaltered by this change. This mechanism of developing center-surround receptive fields underlies the results of Linsker [18], as explained in [20, 21] and in section 2.4. Again, an increase in $w_{\max}$ or decrease in $w_{\rm tot}$ leads to sharpening of the positive part of the receptive field under subtractive constraints (column 4).
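These single-cell outcomes are easy to reproduce numerically. The sketch below is not the paper's original code: it uses a one-dimensional row of inputs rather than the paper's two-dimensional diameter-13 receptive field, and the grid size and correlation width are assumed values. It integrates $\frac{d}{dt}w = Cw$ under no constraint, the multiplicative constraint, and the subtractive constraint:

```python
import numpy as np

# Illustrative 1-D version of the Fig. 1A setup (assumed parameters:
# 25 inputs, Gaussian correlation width 4, w_max = 8, w_min = 0, w_init = 1).
n_inputs, sigma = 25, 4.0
w_max, w_min, w_init = 8.0, 0.0, 1.0
x = np.arange(n_inputs)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))

def develop(constraint, dt=0.01, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    w = w_init + 0.1 * rng.standard_normal(n_inputs)  # random initial weights
    n = np.ones(n_inputs)                   # DC vector (1, 1, ..., 1)^T
    for _ in range(steps):
        dw = C @ w
        if constraint == "multiplicative":  # M1: gamma(w) = n.Cw / n.w
            dw -= (n @ dw) / (n @ w) * w
        elif constraint == "subtractive":   # S1: epsilon(w) = n.Cw / n.n
            dw -= (n @ dw) / (n @ n) * n
        # hard limits on weights, as in the paper's simulations (clipping can
        # slightly break exact conservation once weights hit the bounds)
        w = np.clip(w + dt * dw, w_min, w_max)
    return w

for kind in ("none", "multiplicative", "subtractive"):
    w = develop(kind)
    print(f"{kind:>14}: center={w[n_inputs // 2]:.2f}, edge={w[0]:.2f}")
# Expected qualitative outcome: all weights saturate at w_max with no
# constraint; a graded, centrally peaked profile under M1; and under S1 a
# core of roughly w_tot / w_max synapses at w_max with the rest at w_min.
```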
Next, consider ocular dominance segregation [28] (Fig. 1C). We suppose the output cell receives two equivalent sets of inputs: left-eye inputs and right-eye inputs. A Gaussian correlation function $C$ describes correlations within each eye as before, while between the two eyes there is zero correlation; and $w_{\min}=0$. Now results are much as in (A), but a new distinction emerges. Under subtractive constraints, ocular dominance segregation occurs: the output cell becomes monocular, receiving input from only a single eye. Under multiplicative constraints, there is no ocular dominance segregation: the two eyes develop equal innervations to the output cell. Segregation can only occur under multiplicative constraints if there are anticorrelations between the two eyes, as will be explained in section 2.5.
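The two-eye competition can be sketched in the same style (again an illustrative one-dimensional toy with assumed parameters, not the paper's simulation; the block-diagonal correlation matrix encodes Gaussian within-eye correlations and zero between-eye correlation):

```python
import numpy as np

# Two-eye version of the previous sketch (cf. Fig. 1C; assumed parameters).
n, sigma = 25, 4.0
w_max, w_min = 8.0, 0.0
x = np.arange(n)
C_within = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))
C = np.block([[C_within, np.zeros((n, n))],
              [np.zeros((n, n)), C_within]])   # block-diagonal: two eyes
nvec = np.ones(2 * n)

def develop(constraint, dt=0.01, steps=20000, seed=1):
    rng = np.random.default_rng(seed)
    w = 1.0 + 0.1 * rng.standard_normal(2 * n)  # w = (w_L, w_R)
    for _ in range(steps):
        dw = C @ w
        if constraint == "multiplicative":
            dw -= (nvec @ dw) / (nvec @ w) * w
        else:                                   # subtractive
            dw -= (nvec @ dw) / (nvec @ nvec) * nvec
        w = np.clip(w + dt * dw, w_min, w_max)
    return w[:n].sum(), w[n:].sum()             # total strength per eye

for kind in ("multiplicative", "subtractive"):
    left, right = develop(kind)
    print(f"{kind:>14}: left-eye total={left:.1f}, right-eye total={right:.1f}")
# Expected: roughly equal totals under the multiplicative constraint (no
# segregation); under the subtractive constraint, one eye captures nearly
# all of the conserved strength (ocular dominance segregation).
```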
Finally, consider the problem of map alignment (Fig. 1D). In the barn owl optic tectum, an auditory map comes to be aligned with a visual map under the guidance of visual experience [16]. Consider a toy model of this, as follows. Two input maps innervate the tectum. One map (the auditory) is adaptive, while the other map (the visual) is fixed and serves as a teacher to the auditory map. We consider a single output cell in the tectum, with activity $y$. Let $w$ be the auditory map's synaptic weight vector, and $x^n$ its vector of input activities corresponding to the $n$th stimulus. Let $s^n$ be the synaptic input from the visual map in response to the same stimulus: the visual map is topographic, so the largest responses $s^n$ will come from stimuli in a particular topographic position in visual space. Suppose a simple activation rule, $y = w \cdot x + s$, is combined with a Hebbian rule, $\frac{d}{dt}w_i = \langle y x_i \rangle$. The brackets indicate an average over input patterns. In the limit in which the visual, teacher signal is dominant, the Hebbian rule becomes $\frac{d}{dt}w = \langle s x \rangle$. Here the right-hand side is a constant vector, $t \equiv \langle s x \rangle$, which measures the correlation between the auditory inputs and the visual teacher input. We consider the case in which there is initially a rough auditory map, so that the correlation of auditory inputs with the visual teacher decreases as a Gaussian from the center of the arbor. In the absence of constraints, all synapses again reach $w_{\max}$, so no selectivity develops. Multiplicative constraints lead the weights to match the correlations, giving a moderate advantage to the best-correlated inputs. Subtractive constraints lead to a sharpening of the map so that only the best-correlated inputs survive.

Figure 1: Outcomes of Development Without Constraints and Under Multiplicative and Subtractive Constraints. [Figure: the top row shows the initial weights (for (A), (B), (C) left/right, and (D)) and the correlation function $C$ or teacher signal $t$; below, rows (A)-(D) show final weight profiles in four columns: unconstrained, multiplicative, subtractive, and subtractive(2).] (A), (B): Outcome of a simple Hebbian development equation: the unconstrained equation is $\frac{d}{dt}w = Cw$. Initial synaptic weights are shown at the top left. The correlation matrix $C$ is a Gaussian function of the separation between two synapses (shown at top right). (A): $w_{\min} = 0$; (B): $w_{\min} = -2$. (C): Outcome of a similar equation but with two identical sets of inputs, representing left- and right-eye inputs. Within each eye, correlations are the same as in (A); between the eyes there is zero correlation. The unconstrained equations are $\frac{d}{dt}w^L = Cw^L$; $\frac{d}{dt}w^R = Cw^R$. (D): Outcome of a simplified model of the alignment of one map with another: a Hebbian rule in the limit in which the output cell's activity is determined by a "teacher signal". The unconstrained equation is $\frac{d}{dt}w = t$ (see text). The constant vector $t$ is a Gaussian centered at the center of the arbor (shown at top right). Note that the resulting weights under multiplicative constraints are proportional to $t$. All results are from simulations of a two-dimensional receptive field consisting of a diameter-13 circle of inputs drawn from a 13 by 13 square grid. The resulting receptive fields were approximately circularly symmetric; the figures show a slice horizontally through the center of the field. All simulations used $w_{\max} = 8$; all except (B) used $w_{\min} = 0$. The left three columns show results for $w_{\rm init} = 1$. The right column (subtractive(2)) uses $w_{\rm init} = 0.5$, which halves the conserved total synaptic strength $w_{\rm tot}$.

In summary, unconstrained Hebbian equations often lead all synapses to saturate at the maximal allowed value, destroying selectivity. Multiplicative constraints then lead the inputs to develop graded strengths. Subtractive constraints lead synapses to saturate at either the maximal or minimal allowed value, and can result in a sharpening to a few best-correlated inputs. They also can allow ocular dominance segregation to develop in circumstances where multiplicative constraints do not. These differences between subtractive and multiplicative constraints are easily understood, as we now show.

2 Multiplicative and Subtractive Constraints for a Single Output Cell

We begin with a general linear synaptic plasticity equation without decays, $\frac{d}{dt}w(t) = Cw(t)$. $C$ is a matrix that drives the dynamics. We assume that $C$ has a complete set of orthonormal eigenvectors $e^a$ with corresponding eigenvalues $\lambda_a$ (that is, $Ce^a = \lambda_a e^a$), as occurs if $C$ is symmetric. In Hebbian learning, $C$ is a correlation matrix: $C_{ij}$ represents the correlation in activity between inputs $i$ and $j$, so $C$ is symmetric (footnote 1). Typically most or all of the eigenvalues of $C$ are positive; for example, if $C$ is the covariance matrix of the input activities then all its eigenvalues are positive. We use indices $i, j$ to refer to the synaptic basis, and $a, b$ to refer to the eigenvector basis. The strength of the $i$th synapse is denoted by $w_i$. The weight vector $w$ can also be written as a combination of the eigenvectors, $w = \sum_a w_a e^a$, where the components of $w$ in the eigenvector basis are $w_a = w \cdot e^a$. We assume as before that the dynamics are linear up to hard limits on the synaptic weights, $w_{\min} \le w_i(t) \le w_{\max}$; we will not explicitly note these limits in subsequent equations.

As discussed in the introduction, a constraint can be enforced through time-varying synaptic decay, where the rate of the decay varies to maintain the constraint. Before studying constraints, it is instructive to review the effects of constant decay terms.

Footnote 1: We work in a representation in which each synapse is represented explicitly, and the density of synapses is implicit. Equivalently, the dynamics may be written in a representation in which the synaptic density or "arbor" function is explicitly represented. The current analysis also applies in these representations, as described in Appendix B.
2.1 The effect of constant decay terms

Inclusion of synaptic decay results in the equation
\[
\frac{d}{dt}w(t) = Cw - \epsilon n - \gamma w \qquad (1)
\]
or, in index notation, $\frac{d}{dt}w_i(t) = \sum_j C_{ij} w_j - \epsilon n_i - \gamma w_i$. The term $\epsilon n$ is a subtractive decay term; $n$ is a constant vector, for example $n = (1, 1, \ldots, 1)^T$ if all synapses decay at the same rate. The term $\gamma w$ is a multiplicative decay term. To examine the effect of these decay terms, we work in the eigenvector basis. Write the initial condition of Eq. 1 as $w(t{=}0) = \sum_a w^0_a e^a$. Write the fixed point of the equation, where $\frac{d}{dt}w = 0$, as $w^{\rm FP} = \sum_a w^{\rm FP}_a e^a$. Then the solution to Eq. 1 can be written
\[
w(t) - w^{\rm FP} = \sum_a e^a \,(w^0_a - w^{\rm FP}_a)\, e^{(\lambda_a - \gamma)t} \qquad (2)
\]
We can compute $w^{\rm FP} = \epsilon\,(C - \gamma \mathbf{1})^{-1} n = \sum_a \epsilon\, \frac{e^a \cdot n}{\lambda_a - \gamma}\, e^a$, so $w^{\rm FP}_a = \epsilon\, \frac{e^a \cdot n}{\lambda_a - \gamma}$ (footnote 2). This solution illustrates the basic difference explained in the introduction between multiplicative and subtractive decay terms. The multiplicative decay term lowers the growth rate of all eigenvectors by the same amount $\gamma$, thus reducing smaller eigenvalues by a greater proportion. This favors the principal eigenvector. The subtractive term moves the fixed point of the equation, $w^{\rm FP}$. This alters the growth only of eigenvectors $e^a$ for which $w^{\rm FP}_a \neq 0$; these are the eigenvectors with nonzero component in the direction $n$, that is, with $e^a \cdot n \neq 0$. That is, constant subtractive decay only alters growth in directions coupled to $n$, and leaves growth unaltered in all directions perpendicular to $n$. Note that neither type of constant decay achieves selectivity: all patterns with $\lambda_a > \gamma$ grow exponentially until the limits on synaptic weights are reached, and all synapses typically reach $w_{\max}$ if the leading eigenvector has no changes of sign. To achieve selectivity, a constraint is needed.

Footnote 2: We have assumed here that $n$ has no components in the nullspace of $C - \gamma \mathbf{1}$, so that $(C - \gamma \mathbf{1})^{-1} n$ is well-defined. This is easily generalized.
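Eq. 2 and the fixed-point formula are easy to verify numerically (a minimal sketch; the random symmetric matrix and the decay rates $\gamma$, $\epsilon$ are arbitrary illustrative choices):

```python
import numpy as np

# Verify Eq. 2 and the fixed point of Eq. 1, dw/dt = Cw - eps*n - gamma*w,
# against direct forward-Euler integration.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = (A + A.T) / 2                     # symmetric => orthonormal eigenvectors
gamma, eps = 0.3, 0.2
n = np.ones(5)
w0 = rng.standard_normal(5)

lam, E = np.linalg.eigh(C)            # columns of E are the eigenvectors e^a
w_fp = E @ (eps * (E.T @ n) / (lam - gamma))   # w_FP = eps (C - gamma 1)^(-1) n

def w_analytic(t):
    """Eq. 2: w(t) - w_FP = sum_a e^a (w0_a - wFP_a) exp((lambda_a - gamma) t)."""
    coef = (E.T @ (w0 - w_fp)) * np.exp((lam - gamma) * t)
    return w_fp + E @ coef

w, dt, T = w0.copy(), 1e-4, 1.0       # integrate Eq. 1 directly
for _ in range(int(T / dt)):
    w = w + dt * (C @ w - eps * n - gamma * w)

print(np.max(np.abs(w - w_analytic(T))))   # small; integration error only
```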
2.2 Formulation of multiplicative and subtractive constraints

We now replace the decay terms above by either a multiplicative or a subtractive constraint term. By multiplicative or subtractive constraints, respectively, we refer to time-varying decay terms $\gamma(t)\,w$ or $\epsilon(t)\,n$ that move $w$, after application of $C$, toward a constraint surface. We assume $\gamma$ and $\epsilon$ are determined by the current weight vector $w(t)$ and do not otherwise depend on $t$, so we write them as $\gamma(w)$ or $\epsilon(w)$. Thus, the constrained equations are (footnote 3)
\[
\frac{d}{dt}w(t) = Cw(t) - \gamma(w)\,w(t) \qquad \text{(Multiplicative Constraints)} \qquad (3)
\]
\[
\frac{d}{dt}w(t) = Cw(t) - \epsilon(w)\,n \qquad \text{(Subtractive Constraints)} \qquad (4)
\]
The vector $n$ is a constant. For example, if all synapses have an equal subtractive decay rate, then $n = (1, 1, \ldots, 1)^T$ in the synaptic basis; we refer to this vector, $(1, 1, \ldots, 1)^T$, as the DC vector.

Multiplicative or subtractive constraints represent two methods of enforcing a constraint, that is, of maintaining the weight vector on some constraint surface. We now consider the type of constraint to be enforced. We will focus on two types. First, a constraint may conserve the total synaptic strength $\sum_i w_i$. We refer to this as a type 1 constraint, and to a multiplicative or subtractive constraint of this type as M1 or S1 respectively. These are frequently used in modeling studies (e.g. M1: [5, 22, 23, 24, 34, 35, 41, 43, 44]; S1: [27, 28]). We define a type 1 constraint more generally as one that conserves the dot product $n \cdot w$ for some constant constraint vector $n$; when $n$ is the DC vector, the conserved quantity is the total synaptic strength. A type 1 constraint corresponds to a hyperplane constraint surface. We choose the subtracted vector $n$ in Eq. 4 to be the same as the constraint vector $n$. Then type 1 constraints can be achieved by choosing
\[
\text{M1:}\quad \gamma(w) = \frac{n \cdot Cw}{n \cdot w} \quad (\text{with } n \cdot w(t{=}0) \neq 0) \qquad (5)
\]
\[
\text{S1:}\quad \epsilon(w) = \frac{n \cdot Cw}{n \cdot n} \qquad (6)
\]
These choices yield $n \cdot \frac{d}{dt}w = \frac{d}{dt}(n \cdot w) = 0$ under Eqs. 3 or 4 respectively.

Second, we consider a constraint that conserves the sum-squared synaptic strength, $\sum_i w_i^2 = w \cdot w$; this corresponds to a hypersphere constraint surface. We refer to this as a type 2 constraint (the numbers '1' and '2' refer to the exponent $p$ in the constrained quantity $\sum_i w_i^p$). This constraint, while not biologically motivated, is often used in theoretical studies, e.g. [17, 32]. We will consider only multiplicative enforcement of this constraint (footnote 4), called M2. M2 can be achieved by choosing
\[
\text{M2:}\quad \gamma(w) = \frac{w \cdot Cw}{w \cdot w} \qquad (7)
\]
This yields $2w \cdot \frac{d}{dt}w = \frac{d}{dt}(w \cdot w) = 0$ under Eq. 3.

Footnote 3: To understand why the term $\gamma(w)\,w(t)$ represents multiplicative constraints, consider a multiplicatively constrained equation $w(t + \Delta t) = \alpha(w)\,[w(t) + Cw(t)\,\Delta t]$, where $\alpha(w)$ achieves the constraint. This is identical to $\frac{w(t+\Delta t) - w(t)}{\Delta t} = Cw(t) - \gamma(w)\,w(t+\Delta t)$ where $\gamma(w) = \frac{1 - \alpha(w)}{\alpha(w)\,\Delta t}$. For $\Delta t \to 0$ this becomes Eq. 3.

Footnote 4: Subtractive enforcement, S2, does not work in the typical case (Theorem 4) in which the fixed points are unstable. The constraint fails where $n$ is tangent to the constraint hypersphere, i.e. at points where $w \cdot n = 0$. Such points form a circumference around the hypersphere. The S2 dynamics flow away from the unstable fixed points, at opposite poles of the hypersphere, and flow into this circumference unless prevented by the bounds on synaptic weights. S2 constraints work if weights are non-negative ($w_{\min} = 0$) and $n = (1, 1, \ldots, 1)^T$; but typically type 2 constraints are used in situations in which weights may take either sign.

Figure 2: Projection Onto the Constraint Surface. [Figure: the vectors $c$, $s$, $Cw$, and $PCw = Cw - \beta s$, with the constraint surface drawn as the line perpendicular to $c$.] The projection operator is $P = \mathbf{1} - \frac{s c^T}{s \cdot c}$. This acts on the unconstrained derivative $Cw$ by removing its $c$ component, projecting the dynamics onto the constraint surface: $c \cdot PCw = c \cdot \frac{d}{dt}w = 0$. This constraint surface is shown as the line perpendicular to $c$. The constraint is enforced through subtraction of a multiple of $s$: $PCw = Cw - \beta s$ where $\beta = c \cdot Cw / c \cdot s$. For multiplicative constraints, $s = w$; for subtractive constraints, $s = n$.

Each form of constrained dynamics can be written $\frac{d}{dt}w = PCw$, where $P$ is a projection operator that projects the unconstrained dynamics onto the constraint surface. For S1, the projection operator is $P = \mathbf{1} - \frac{n n^T}{n \cdot n}$; for M1, it is $P = \mathbf{1} - \frac{w n^T}{w \cdot n}$; and for M2, it is $P = \mathbf{1} - \frac{w w^T}{w \cdot w}$. We can write these operators as $P = \mathbf{1} - \frac{s c^T}{s \cdot c}$, where $s$ is the subtracted vector, $c$ the constraint vector, and $\mathbf{1}$ the identity matrix (Fig. 2). The projection operator removes the $c$ component of the unconstrained derivative $Cw$, through subtraction of a multiple of $s$. The subtracted vector $s$ represents the method of constraint enforcement: $s = w$ for multiplicative constraints, while $s = n$ for subtractive constraints. The constraint vector $c$ determines the constraint that is enforced: the dynamics remain on the constraint surface $c \cdot w = \text{constant}$. When the total synaptic strength is conserved (type 1 constraint), the constraint vector is $c = n$; while if the sum-squared strength is conserved (type 2 constraint), the constraint vector is $c = w$. Note in Fig. 2 that S1 ($s = c = n$) represents perpendicular projection onto the constraint surface (footnote 5); M1 ($s = w$, $c = n$) is projection along $w$; and M2 ($s = c = w$) is both perpendicular projection and projection along $w$.

Footnote 5: One could also consider a non-perpendicular form of S1 constraints, S1′, in which $s$ and $c = n$ are constants but $s \neq c$ (and $s \cdot c \neq 0$). The results for S1 constraints can be generalized to S1′ constraints without major changes. S1 constraints may turn into S1′ when viewed in alternative synaptic representations; see Appendix B.
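The projection form makes the conservation laws one-line checks (a sketch with arbitrary $C$ and $w$; the helper `P` is my notation, not code from the paper):

```python
import numpy as np

# The three constrained flows written as dw/dt = P C w, with
# P = 1 - s c^T / (s.c): s is the subtracted vector, c the constraint vector.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
C = (A + A.T) / 2
w = rng.standard_normal(5)
n = np.ones(5)

def P(s, c):
    """Projection operator removing the c-component by subtracting along s."""
    return np.eye(len(s)) - np.outer(s, c) / (s @ c)

S1 = P(n, n) @ C @ w     # s = c = n: perpendicular projection
M1 = P(w, n) @ C @ w     # s = w, c = n: projection along w
M2 = P(w, w) @ C @ w     # s = c = w: perpendicular projection along w
# Each flow removes the c-component, so the constrained quantity is conserved:
print(np.isclose(n @ S1, 0), np.isclose(n @ M1, 0), np.isclose(w @ M2, 0))
# The M1 projection reproduces Eq. 3 with gamma(w) chosen as in Eq. 5:
print(np.allclose(M1, C @ w - (n @ (C @ w)) / (n @ w) * w))
```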
2.3 Dynamical effects of multiplicative and subtractive constraints

We begin by illustrating in Fig. 3 the typical dynamics under M1, S1, and M2 in the plane formed by the principal eigenvector of $C$, $e^0$, and one other eigenvector with positive eigenvalue, $e^1$. We have illustrated two typical cases for M1 and S1 constraints.

In Fig. 3A we have drawn the principal eigenvector $e^0$ close in direction to the M1 and S1 constraint vector $n$. This is typical for Hebbian learning when there are only positive correlations and the total synaptic sum is conserved. Positive correlations lead to a principal eigenvector in which all weights have a single sign; this is close in direction to the DC constraint vector $(1, 1, \ldots, 1)^T$, which conserves total synaptic strength. Multiplicative and subtractive constraints lead to very different outcomes in this case: multiplicative constraints lead to convergence to $e^0$, whereas subtractive constraints lead to unstable flow. The outcome in this case was illustrated in Figs. 1A,B.

In Fig. 3B we have drawn the principal eigenvector $e^0$ perpendicular to the M1 and S1 constraint vector $n$. This is typical for Hebbian learning when correlations among input activities oscillate in sign as a function of input separation [25]. Such oscillations lead to a principal eigenvector in which weights oscillate in sign and sum approximately to zero. In this case, the type of constraint enforcement makes little difference: the weight vector typically flows to a saturated version of the principal eigenvector.

Under M2 constraints (Fig. 3C), the dynamics converge to $e^0$. Figure 3 may be taken as a visual aid for the remainder of this section.

2.3.1 General differences between multiplicative and subtractive constraint enforcement: fixed points and stability

The results of this subsection apply to any multiplicatively or subtractively constrained dynamics that are described by Eq. 3 or 4. Our overall conclusions are as follows:

Conclusion 1. Under a multiplicatively enforced constraint, the weight vector $w$ generally converges to a multiple of the principal eigenvector $e^0$ that satisfies the constraint, provided such a point exists.

Conclusion 2. Under a subtractively enforced constraint, the dynamics have no stable outcome within the hypercube of allowed synaptic weights. Generally all weights saturate at $w_{\max}$ or $w_{\min}$.

To establish these points, we will examine the locations and stability of the fixed points that are in the interior of the hypercube of allowed synaptic weights ("interior fixed points"). A fixed point $w^{\rm FP}$ is a point where the flow $\frac{d}{dt}w = 0$. If $C$ is symmetric, the constrained dynamics must either flow to a stable interior fixed point or else flow to the hypercube. The locations of the fixed points follow trivially from Eqs. 3-4 and the fact that the dynamics remain on the constraint surface:

Theorem 1. Under a multiplicatively enforced constraint, the interior fixed points are the intersections of the eigenvectors of $C$ with the constraint surface, that is, the points $w$ on the constraint surface at which $Cw \propto w$.

Theorem 2. Under a subtractively enforced constraint, the interior fixed points are the points $w$ on the constraint surface at which $Cw \propto n$.

[Figure 3 (partial): flow diagrams in the plane spanned by $e^0$ and $e^1$, showing the unconstrained dynamics and the dynamics under M1 and S1 constraints.]
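Conclusions 1 and 2 and Theorem 1 can be checked directly (a sketch; the random positive correlation matrix is an illustrative assumption, giving an all-positive principal eigenvector as in Fig. 3A, and the hard bounds on weights are omitted so the S1 instability shows up as unbounded growth):

```python
import numpy as np

# Numerical check of Conclusions 1-2 / Theorem 1 for a random symmetric C
# with positive entries (so n.e0 != 0, the Fig. 3A case).
rng = np.random.default_rng(2)
C = rng.uniform(0.1, 1.0, (6, 6)); C = (C + C.T) / 2
n = np.ones(6)
lam, E = np.linalg.eigh(C)
e0 = E[:, -1] * np.sign(E[:, -1] @ n)       # principal eigenvector, n.e0 > 0

def step(w, kind, dt=0.005):
    dw = C @ w
    if kind == "M1":                        # Eq. 5: gamma(w) = n.Cw / n.w
        dw -= (n @ dw) / (n @ w) * w
    else:                                   # Eq. 6: epsilon(w) = n.Cw / n.n
        dw -= (n @ dw) / (n @ n) * n
    return w + dt * dw

w0 = rng.uniform(0.5, 1.5, 6)
w_tot = n @ w0

w = w0.copy()
for _ in range(40000):                      # long run: M1 converges
    w = step(w, "M1")
# Theorem 1 / Conclusion 1: the stable interior fixed point is the multiple
# of e0 lying on the constraint surface, w = (w_tot / n.e0) e0.
print(np.allclose(w, (w_tot / (n @ e0)) * e0, atol=1e-3))

w = w0.copy()
for _ in range(3000):                       # short run: S1 flow is unstable
    w = step(w, "S1")
perp = w - (n @ w) / (n @ n) * n            # component perpendicular to n
print(np.linalg.norm(perp))                 # grows without bound (Conclusion 2)
```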
Journal: Neural Computation, Volume 6, 1994.